Analysis of decision boundaries in linearly combined neural classifiers

Authors

  • Kagan Tumer
  • Joydeep Ghosh
Abstract

Combining or integrating the outputs of several pattern classifiers has led to improved performance in a multitude of applications. This paper provides an analytical framework to quantify the improvements in classification results due to combining. We show that combining networks linearly in output space reduces the variance of the actual decision region boundaries around the optimum boundary. This result is valid under the assumption that the a posteriori probability distributions for each class are locally monotonic around the Bayes optimum boundary. In the absence of classifier bias, the error is shown to be proportional to the boundary variance, resulting in a simple expression for error rate improvements. In the presence of bias, the error reduction, expressed in terms of a bias reduction factor, is shown to be less than or equal to the reduction obtained in the absence of bias. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space.
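The variance-reduction claim can be illustrated numerically. The sketch below is not from the paper: it assumes a toy one-dimensional problem with a monotonic posterior-difference curve (zero at the Bayes boundary x = 0) and models each classifier's error as an independent additive offset. Averaging the outputs of N such classifiers shrinks the variance of the estimated boundary roughly by a factor of N.

```python
import numpy as np

def boundary(noise_scale, n_classifiers, rng):
    """Zero crossing of the average of n noisy posterior-difference curves."""
    xs = np.linspace(-3, 3, 2001)
    true_diff = np.tanh(xs)  # monotonic posterior difference, zero at x = 0
    # Illustrative error model: each classifier adds an independent constant offset.
    offsets = rng.normal(0.0, noise_scale, size=n_classifiers)
    avg = true_diff + offsets.mean()
    return xs[np.argmin(np.abs(avg))]  # boundary of the linearly combined classifier

def boundary_variance(n_classifiers, trials=2000, noise_scale=0.3, seed=0):
    """Empirical variance of the combined boundary around the optimum x = 0."""
    rng = np.random.default_rng(seed)
    bs = [boundary(noise_scale, n_classifiers, rng) for _ in range(trials)]
    return float(np.var(bs))

v1 = boundary_variance(1)   # single classifier
v5 = boundary_variance(5)   # average of five classifiers
print(v1, v5, v1 / v5)      # ratio is close to 5, as the 1/N scaling predicts
```

Under the zero-bias assumption of the paper, the added error above Bayes error is proportional to this boundary variance, so the ratio printed above translates directly into an error-rate improvement.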


Similar articles

Robustness of classifiers to uniform $\ell_p$ and Gaussian noise

We study the robustness of classifiers to various kinds of random noise models. In particular, we consider noise drawn uniformly from the $\ell_p$ ball for $p \in [1, \infty]$ and Gaussian noise with an arbitrary covariance matrix. We characterize this robustness to random noise in terms of the distance to the decision boundary of the classifier. This analysis applies to linear classifiers as well as classifie...


IJSRD - International Journal for Scientific Research & Development | Vol. 2, Issue 12, 2015 | ISSN (online): 2321-0613

Arrhythmia is a serious health problem and a major cause of many kinds of heart disease; some arrhythmias can even be fatal. The main objective of this work is to combine the decisions of individual classifiers efficiently so as to improve classification accuracy in predicting the disease. As different classifiers provide different opinions to the target system, the combined...


Bayesian Linear Combination of Neural Networks

Classifier ensembles have been one of the main topics of interest in the neural networks, machine learning and pattern recognition communities during the past fifteen years [21,28,16,17,26,36,27,23,11]. They are currently among the state-of-the-art techniques available for the design of classification systems and an effective alternative to the traditional approach based on the design of a single, ...


Robustness of classifiers: from adversarial to random noise

Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes. We pr...


Popular Ensemble Methods: An Empirical Study

An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new...
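The bagging procedure mentioned in this snippet can be sketched in a few lines. The example below is illustrative only (it is not the authors' implementation): each base learner is a one-dimensional decision stump trained on a bootstrap resample of noisy toy data, and the ensemble classifies by majority vote over the stumps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data: true class is 1 when x > 0; 20% of training labels are flipped.
x = rng.normal(0, 1, 300)
y = (x > 0).astype(int)
flip = rng.random(300) < 0.2
y[flip] ^= 1

def train_stump(xs, ys):
    """Pick the threshold classifier (decision stump) with best training accuracy."""
    best_t, best_acc = 0.0, 0.0
    for t in np.unique(xs):
        acc = np.mean((xs > t).astype(int) == ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Bagging: train each stump on a bootstrap resample, combine by majority vote.
n_models = 25
thresholds = []
for _ in range(n_models):
    idx = rng.integers(0, len(x), len(x))  # sample with replacement
    thresholds.append(train_stump(x[idx], y[idx]))

# Evaluate on clean test data.
x_test = rng.normal(0, 1, 1000)
y_test = (x_test > 0).astype(int)
votes = np.mean([(x_test > t).astype(int) for t in thresholds], axis=0)
ensemble_pred = (votes > 0.5).astype(int)

acc_ens = float(np.mean(ensemble_pred == y_test))
acc_single = float(np.mean((x_test > thresholds[0]).astype(int) == y_test))
print(acc_single, acc_ens)
```

Boosting differs in that resampling (or reweighting) is adaptive, concentrating on the examples that earlier learners misclassified, rather than uniform as in bagging.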



Journal:
  • Pattern Recognition

Volume 29, Issue 

Pages  -

Publication year: 1996